This guide shares strategies and tactics for getting better results from GPTs. The methods described here can sometimes be deployed in combination for greater effect. We encourage experimentation to find the methods that work best for you.
Some of the examples demonstrated here currently work only with our most capable model, gpt-4. If you don't yet have access to gpt-4, consider joining the waitlist. In general, if you find that a GPT model fails at a task and a more capable model is available, it's often worth trying again with the more capable model.
GPTs can’t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you’d like to see. The less GPTs have to guess at what you want, the more likely you’ll get it.
Tactics:
GPTs can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text to GPTs can help in answering with fewer fabrications.
Tactics:
Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to GPTs. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.
Tactics:
If asked to multiply 17 by 28, you might not know it instantly, but can still work it out with time. Similarly, GPTs make more reasoning errors when trying to answer right away, rather than taking time to work out an answer. Asking for a chain of reasoning before an answer can help GPTs reason their way toward correct answers more reliably.
Tactics:
Compensate for the weaknesses of GPTs by feeding them the outputs of other tools. For example, a text retrieval system can tell GPTs about relevant documents. A code execution engine can help GPTs do math and run code. If a task can be done more reliably or efficiently by a tool rather than by a GPT, offload it to get the best of both.
Tactics:
Improving performance is easier if you can measure it. In some cases a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples. Therefore to be sure that a change is net positive to performance it may be necessary to define a comprehensive test suite (also known as an "eval").
Tactic:
Each of the strategies listed above can be instantiated with specific tactics. These tactics are meant to provide ideas for things to try. They are by no means fully comprehensive, and you should feel free to try creative ideas not represented here.
In order to get a highly relevant response, make sure that requests provide any important details or context. Otherwise you are leaving it up to the model to guess what you mean.
| Worse | Better |
| --- | --- |
| How do I add numbers in Excel? | How do I add up a row of dollar amounts in Excel? I want to do this automatically for a whole sheet of rows with all the totals ending up on the right in a column called "Total". |
| Who's president? | Who was the president of Mexico in 2021, and how frequently are elections held? |
| Write code to calculate the Fibonacci sequence. | Write a TypeScript function to efficiently calculate the Fibonacci sequence. Comment the code liberally to explain what each piece does and why it's written that way. |
| Summarize the meeting notes. | Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any. |
The system message can be used to specify the persona used by the model in its replies.
Delimiters like triple quotation marks, XML tags, section titles, etc. can help demarcate sections of text to be treated differently.
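For example, a summarization request that uses triple quotes to demarcate the input might look like the following minimal sketch (assuming the openai Python package's ChatCompletion interface):

```python
import openai  # assumes the v0.x ChatCompletion interface

document = "..."  # the text to be summarized

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Summarize the text delimited by triple quotes."},
        # Triple quotes make it unambiguous where the document starts and ends.
        {"role": "user", "content": f'"""{document}"""'},
    ],
)
print(response["choices"][0]["message"]["content"])
```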
For straightforward tasks such as these, using delimiters might not make a difference in the output quality. However, the more complex a task is the more important it is to disambiguate task details. Don’t make GPTs work to understand exactly what you are asking of them.
Some tasks are best specified as a sequence of steps. Writing the steps out explicitly can make it easier for the model to follow them.
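A sketch of a system message that spells out the steps (the two steps here, summarize then translate, are illustrative):

```python
import openai

# Explicit numbered steps make the intended workflow unambiguous.
system_message = (
    "Use the following step-by-step instructions to respond to user inputs.\n"
    "Step 1 - Summarize the user's text in one sentence, prefixed with 'Summary: '.\n"
    "Step 2 - Translate the summary into Spanish, prefixed with 'Translation: '."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "..."},  # the text to process
    ],
)
```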
Providing general instructions that apply to all examples is generally more efficient than demonstrating all permutations of a task by example, but in some cases providing examples may be easier. For example, you might intend for the model to copy a particular style of responding to user queries that is difficult to describe explicitly. This is known as "few-shot" prompting.
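A minimal few-shot sketch, again assuming the openai Python package; the style-demonstrating exchange is illustrative:

```python
import openai

# The example user/assistant turn demonstrates the desired style
# instead of trying to describe it.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer in a consistent style."},
        {"role": "user", "content": "Teach me about patience."},
        {"role": "assistant", "content": "The river that carves the deepest "
         "valley flows from a modest spring."},
        {"role": "user", "content": "Teach me about resilience."},
    ],
)
```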
You can ask the model to produce outputs that are of a given target length. The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points.
If we can provide a model with trusted information that is relevant to the current query, then we can instruct the model to use the provided information to compose its answer.
Given that GPTs have limited context windows, in order to apply this tactic we need some way to dynamically lookup information that is relevant to the question being asked. Embeddings can be used to implement efficient knowledge retrieval. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval" for more details on how to implement this.
If the input has been supplemented with relevant knowledge, it's straightforward to request that the model add citations to its answers by referencing passages from provided documents. Note that citations in the output can then be verified programmatically by string matching within the provided documents.
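A sketch of that verification step in plain Python; the assumption that citations are wrapped in double quotes is illustrative, and any fixed format the prompt establishes would work:

```python
import re

def verify_citations(answer: str, documents: list[str]) -> dict[str, bool]:
    """Check that each quoted citation in the answer appears verbatim in at
    least one provided document. Assumes the model was instructed to wrap
    citations in double quotes; short quoted spans are ignored as noise."""
    citations = re.findall(r'"([^"]{20,})"', answer)
    return {c: any(c in doc for doc in documents) for c in citations}
```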
For tasks in which lots of independent sets of instructions are needed to handle different cases, it can be beneficial to first classify the type of query and to use that classification to determine which instructions are needed. This can be achieved by defining fixed categories and hardcoding instructions that are relevant for handling tasks in a given category. This process can also be applied recursively to decompose a task into a sequence of stages. The advantage of this approach is that each query will contain only those instructions that are required to perform the next stage of a task which can result in lower error rates compared to using a single query to perform the whole task. This can also result in lower costs since larger prompts cost more to run (see pricing information).
Suppose for example that for a customer service application, queries could be usefully classified as follows:
Based on the classification of the customer query, a set of more specific instructions can be provided to a GPT model to handle next steps. For example, suppose the customer requires help with "troubleshooting".
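Putting the two stages together, a hedged sketch might look like this; the category names and instruction texts are placeholders for your own taxonomy:

```python
import openai

# Hypothetical categories; a real application would define its own.
INSTRUCTIONS = {
    "troubleshooting": "Help the user diagnose their technical issue step by step...",
    "billing": "Answer questions about invoices and refunds...",
    "other": "Politely redirect the user to general support...",
}

def classify(query: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Classify the query as one of: "
             + ", ".join(INSTRUCTIONS) + ". Respond with the category only."},
            {"role": "user", "content": query},
        ],
    )
    category = response["choices"][0]["message"]["content"].strip().lower()
    return category if category in INSTRUCTIONS else "other"

def handle(query: str) -> str:
    # Only the instructions for the matched category are injected,
    # keeping each prompt small and focused.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": INSTRUCTIONS[classify(query)]},
            {"role": "user", "content": query},
        ],
    )
    return response["choices"][0]["message"]["content"]
```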
Notice that the model has been instructed to emit special strings to indicate when the state of the conversation changes. This enables us to turn our system into a state machine where the state determines which instructions are injected. By keeping track of state, what instructions are relevant at that state, and also optionally what state transitions are allowed from that state, we can put guardrails around the user experience that would be hard to achieve with a less structured approach.
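A minimal sketch of such a state machine; the marker strings and state names are hypothetical:

```python
# Each state lists the special strings the model may emit and the state
# each one transitions to; anything else keeps the conversation in place.
TRANSITIONS = {
    "triage": {"[TO_TROUBLESHOOTING]": "troubleshooting"},
    "troubleshooting": {"[TO_ESCALATION]": "escalation"},
}

def next_state(state: str, model_output: str) -> str:
    for marker, target in TRANSITIONS.get(state, {}).items():
        if marker in model_output:
            return target  # only transitions allowed from this state can fire
    return state
```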
Since GPTs have a fixed context length, dialogue between a user and an assistant in which the entire conversation is included in the context window cannot continue indefinitely.
There are various workarounds to this problem, one of which is to summarize previous turns in the conversation. Once the size of the input reaches a predetermined threshold length, this could trigger a query that summarizes part of the conversation and the summary of the prior conversation could be included as part of the system message. Alternatively, prior conversation could be summarized asynchronously in the background throughout the entire conversation.
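A sketch of the threshold-triggered variant, assuming the tiktoken tokenizer package for counting tokens; the budget and the number of retained turns are illustrative:

```python
import tiktoken  # assumes the tiktoken tokenizer package

enc = tiktoken.encoding_for_model("gpt-4")
TOKEN_BUDGET = 6000  # illustrative threshold, well under the context window

def maybe_compress(messages: list[dict], summarize) -> list[dict]:
    """If the conversation exceeds the budget, replace the oldest turns with
    a model-written summary folded into the system message. `summarize` is
    your own function that asks the model to summarize a list of turns."""
    total = sum(len(enc.encode(m["content"])) for m in messages)
    if total <= TOKEN_BUDGET:
        return messages
    old, recent = messages[1:-4], messages[-4:]  # keep system msg + last turns
    system = {"role": "system",
              "content": messages[0]["content"]
              + "\n\nSummary of the conversation so far:\n" + summarize(old)}
    return [system] + recent
```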
An alternative solution is to dynamically select previous parts of the conversation that are most relevant to the current query. See the tactic "Use embeddings-based search to implement efficient knowledge retrieval".
Since GPTs have a fixed context length, they cannot be used to summarize a text longer than the context length minus the length of the generated summary in a single query.
To summarize a very long document such as a book we can use a sequence of queries to summarize each section of the document. Section summaries can be concatenated and summarized producing summaries of summaries. This process can proceed recursively until an entire document is summarized. If it’s necessary to use information about earlier sections in order to make sense of later sections, then a further trick that can be useful is to include a running summary of the text that precedes any given point in the book while summarizing content at that point. The effectiveness of this procedure for summarizing books has been studied in previous research by OpenAI using variants of GPT-3.
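A recursive sketch of this procedure; `summarize` stands in for your own model-calling function, and the chunk size is illustrative:

```python
def summarize_document(text: str, summarize, chunk_chars: int = 8000) -> str:
    """Recursively summarize a long document. `summarize` asks the model
    for a summary of a text that already fits in the context window."""
    if len(text) <= chunk_chars:
        return summarize(text)
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    # Summaries of summaries: concatenate section summaries and recurse
    # until the whole thing fits in a single query.
    joined = "\n".join(summarize(c) for c in chunks)
    return summarize_document(joined, summarize, chunk_chars)
```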
Sometimes we get better results when we explicitly instruct the model to reason from first principles before coming to a conclusion. Suppose for example we want a model to evaluate a student’s solution to a math problem. The most obvious way to approach this is to simply ask the model if the student's solution is correct or not.
But the student's solution is actually not correct! We can get the model to successfully notice this by prompting the model to generate its own solution first.
The previous tactic demonstrates that it is sometimes important for the model to reason in detail about a problem before answering a specific question. For some applications, the reasoning process that a model uses to arrive at a final answer would be inappropriate to share with the user. For example, in tutoring applications we may want to encourage students to work out their own answers, but a model’s reasoning process about the student’s solution could reveal the answer to the student.
Inner monologue is a tactic that can be used to mitigate this. The idea of inner monologue is to instruct the model to put parts of the output that are meant to be hidden from the user into a structured format that makes parsing them easy. Then before presenting the output to the user, the output is parsed and only part of the output is made visible.
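For instance, if the system message instructs the model to wrap its private reasoning in tags, the visible portion can be recovered with a few lines of parsing (the tag name here is an assumption; any easy-to-parse delimiter works):

```python
import re

def visible_part(model_output: str) -> str:
    # Strip everything the model put inside <scratchpad>...</scratchpad>
    # before showing the output to the user.
    return re.sub(r"<scratchpad>.*?</scratchpad>", "", model_output,
                  flags=re.DOTALL).strip()
```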
Alternatively, this can be achieved with a sequence of queries in which all except the last have their output hidden from the end user.
First, we can ask the model to solve the problem on its own. Since this initial query doesn't require the student’s solution, it can be omitted. This provides the additional advantage that there is no chance that the model’s solution will be biased by the student’s attempted solution.
Next, we can have the model use all available information to assess the correctness of the student’s solution.
Finally, we can let the model use its own analysis to construct a reply in the persona of a helpful tutor.
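A sketch of this three-query sequence, assuming the openai Python package; the prompt wordings are illustrative:

```python
import openai

def ask(system: str, user: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return response["choices"][0]["message"]["content"]

def tutor_reply(problem: str, student_solution: str) -> str:
    # Query 1: the model solves the problem itself; the student's attempt
    # is withheld so it cannot bias the model's own solution.
    model_solution = ask("Solve the problem. Show your work.", problem)
    # Query 2: compare the two solutions; this output stays hidden.
    analysis = ask("Compare the student's solution to the correct solution "
                   "and state whether the student is correct.",
                   f"Problem: {problem}\nCorrect solution: {model_solution}\n"
                   f"Student's solution: {student_solution}")
    # Query 3: only this output is shown to the student.
    return ask("You are a helpful tutor. Using the analysis, give the student "
               "a hint without revealing the answer.",
               f"Problem: {problem}\nAnalysis: {analysis}")
```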
Suppose that we are using a model to list excerpts from a source which are relevant to a particular question. After listing each excerpt the model needs to determine if it should start writing another or if it should stop. If the source document is large, it is common for a model to stop too early and fail to list all relevant excerpts. In that case, better performance can often be obtained by prompting the model with followup queries to find any excerpts it missed on previous passes.
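A hedged sketch of that loop; `ask` stands in for your own model-calling function, and the NONE sentinel is an assumption the prompt must establish:

```python
def collect_excerpts(ask, document: str, question: str,
                     max_passes: int = 3) -> list[str]:
    """Repeatedly ask for excerpts until the model reports none remain."""
    excerpts: list[str] = []
    for _ in range(max_passes):
        reply = ask(f"Document:\n{document}\n\nQuestion: {question}\n"
                    f"Excerpts already found: {excerpts}\n"
                    "List any additional relevant excerpts, or say NONE.")
        if "NONE" in reply:
            break
        excerpts.extend(line for line in reply.splitlines() if line.strip())
    return excerpts
```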
A model can leverage external sources of information if provided as part of its input. This can help the model to generate more informed and up-to-date responses. For example, if a user asks a question about a specific movie, it may be useful to add high quality information about the movie (e.g. actors, director, etc…) to the model’s input. Embeddings can be used to implement efficient knowledge retrieval, so that relevant information can be added to the model input dynamically at run-time.
A text embedding is a vector that can measure the relatedness between text strings. Similar or relevant strings will be closer together than unrelated strings. This fact, along with the existence of fast vector search algorithms means that embeddings can be used to implement efficient knowledge retrieval. In particular, a text corpus can be split up into chunks, and each chunk can be embedded and stored. Then a given query can be embedded and vector search can be performed to find the embedded chunks of text from the corpus that are most related to the query (i.e. closest together in the embedding space).
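A minimal sketch using the openai Python package's Embedding interface and numpy; in a real system the chunk embeddings would be precomputed and stored in a vector database:

```python
import numpy as np
import openai  # assumes the v0.x Embedding interface

def embed(texts: list[str]) -> np.ndarray:
    response = openai.Embedding.create(
        model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in response["data"]])

def top_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    chunk_vecs = embed(chunks)     # in practice, precompute and store these
    query_vec = embed([query])[0]
    # ada-002 vectors are close to unit length, so a dot product is a
    # reasonable proxy for cosine similarity.
    scores = chunk_vecs @ query_vec
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```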
Example implementations can be found in the OpenAI Cookbook. See the tactic “Instruct the model to use retrieved knowledge to answer queries” for an example of how to use knowledge retrieval to minimize the likelihood that a model will make up incorrect facts.
GPTs cannot be relied upon to perform arithmetic or long calculations accurately on their own. In cases where this is needed, a model can be instructed to write and run code instead of making its own calculations. In particular, a model can be instructed to put code that is meant to be run into a designated format such as triple backticks. After an output is produced, the code can be extracted and run. Finally, if necessary, the output from the code execution engine (i.e. a Python interpreter) can be provided as an input to the model for the next query.
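A sketch of the extract-and-run step; as the warning below notes, the execution itself must be sandboxed, which this sketch omits:

```python
import re
import subprocess
import tempfile

def run_model_code(model_output: str) -> str:
    """Extract a triple-backtick python block and run it.
    WARNING: run untrusted code only inside a sandbox (container, VM,
    or restricted interpreter); this sketch omits that isolation."""
    match = re.search(r"```python\n(.*?)```", model_output, re.DOTALL)
    if match is None:
        return ""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(match.group(1))
        path = f.name
    result = subprocess.run(["python", path], capture_output=True,
                            text=True, timeout=10)
    return result.stdout or result.stderr
```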
Another good use case for code execution is calling external APIs. If a model is instructed in the proper use of an API, it can write code that makes use of it. A model can be instructed in how to use an API by providing it with documentation and/or code samples showing how to use the API.
WARNING: Executing code produced by a model is not inherently safe and precautions should be taken in any application that seeks to do this. In particular, a sandboxed code execution environment is needed to limit the harm that untrusted code could cause.
Sometimes it can be hard to tell whether a change (e.g., a new instruction or a new design) makes your system better or worse. Looking at a few examples may hint at which is better, but with small sample sizes it can be hard to distinguish between a true improvement and random luck. Maybe the change helps performance on some inputs, but hurts performance on others.
Evaluation procedures (or "evals") are useful for optimizing system designs. Good evals are:
| Difference to detect | Sample size needed for 95% confidence |
| --- | --- |
| 30% | ~10 |
| 10% | ~100 |
| 3% | ~1,000 |
| 1% | ~10,000 |
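As a rough rule of thumb, the sample sizes in this table scale as the inverse square of the difference to detect: distinguishing a difference of Δ between two proportions takes on the order of 1/Δ² samples (e.g., Δ = 0.1 implies roughly 100). Treat this as an order-of-magnitude guide rather than an exact power calculation.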
Evaluation of outputs can be done by computers, humans, or a mix. Computers can automate evals with objective criteria (e.g., questions with single correct answers) as well as some subjective or fuzzy criteria, in which model outputs are evaluated by other model queries. OpenAI Evals is an open-source software framework that provides tools for creating automated evals.
Model-based evals can be useful when there exists a range of possible outputs that would be considered equally high in quality (e.g. for questions with long answers). The boundary between what can be realistically evaluated with a model-based eval and what requires a human to evaluate is fuzzy and is constantly shifting as models become more capable. We encourage experimentation to figure out how well model-based evals can work for your use case.
Suppose it is known that the correct answer to a question should make reference to a specific set of known facts. Then we can use a model query to count how many of the required facts are included in the answer.
For example, a system message along the following lines could be used (the exact wording here is illustrative):
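```
You will be provided with text delimited by triple quotes that is supposed
to be the answer to a question. Check whether the following pieces of
information are directly contained in the answer:

- Neil Armstrong was the first person to walk on the moon.
- The date Neil Armstrong first walked on the moon was July 21, 1969.

For each point, state whether it is satisfied, then output the number of
points satisfied on a final line.
```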
Here's an example input where both points are satisfied:
Here's an example input where only one point is satisfied:
Here's an example input where none are satisfied:
There are many possible variants on this type of model-based eval. Consider the following variation which tracks the kind of overlap between the candidate answer and the gold-standard answer, and also tracks whether the candidate answer contradicts any part of the gold-standard answer.
Here's an example input with a substandard answer which nonetheless does not contradict the expert answer:
Here's an example input with answer that directly contradicts the expert answer:
Here's an example input with a correct answer that also provides a bit more detail than is necessary:
For more inspiration, visit the OpenAI Cookbook, which contains example code and also links to third-party resources such as: